Search results for: "Hayden Field Lauren Feiner"


3 mentions found


Many civil society leaders told CNBC the order does not go far enough to recognize and address real-world harms that stem from AI models — especially those affecting marginalized communities. "One of the thrusts of the executive order is definitely that 'AI can improve governmental administration, make our lives better and we don't want to stand in the way of innovation,'" Venzke told CNBC. Mitchell wished she had seen "foresight" approaches highlighted in the executive order, such as disaggregated evaluation approaches, which can analyze a model as data is scaled up. Even experts who praised the executive order's scope believe the work will be incomplete without action from Congress. For example, it seeks to work within existing immigration law to make it easier to retain high-skilled AI workers in the U.S.
Persons: Kamala Harris, Joe Biden, Maya Wiley, Cody Venzke, Margaret Mitchell, Joy Buolamwini, Divyansh Kaushik
Organizations: White House, Leadership Conference on Civil and Human Rights, CNBC, American Civil Liberties Union (ACLU), Algorithmic Justice League, Federation of American Scientists
Locations: Washington, DC; New York; U.S.
President Joe Biden unveiled a new executive order on artificial intelligence — the U.S. government's first action of its kind — requiring new safety assessments, equity and civil rights guidance and research on AI's impact on the labor market. The order also calls for working with international partners to implement AI standards around the world, and it comes ahead of an AI safety summit hosted by the United Kingdom. President Biden's executive order requires that large companies share safety test results with the U.S. government before the official release of AI systems.
Persons: Joe Biden, Bruce Reed
Organizations: White House, Commerce Department, Department of Health and Human Services, National Institute of Standards and Technology
Locations: San Francisco, Calif.; U.S.; U.K.
Google and OpenAI, two U.S. leaders in artificial intelligence, have opposing ideas about how the technology should be regulated by the government, a new filing reveals. Google is one of the leading developers of generative AI with its chatbot Bard, alongside Microsoft-backed OpenAI with its ChatGPT bot. While OpenAI CEO Sam Altman touted the idea of a new government agency focused on AI to deal with its complexities and license the technology, Google in its filing said it preferred a "multi-layered, multi-stakeholder approach to AI governance." "At the national level, we support a hub-and-spoke approach—with a central agency like the National Institute of Standards and Technology (NIST) informing sectoral regulators overseeing AI implementation—rather than a 'Department of AI,'" Google wrote in its filing. "There is this question of should there be a new agency specifically for AI or not?"
Persons: Sam Altman, Emily M. Bender, Brad Smith, Greg Brockman, Ilya Sutskever, Kent Walker, Helen Toner
Organizations: Google, OpenAI, National Telecommunications and Information Administration, Washington Post, Microsoft, National Institute of Standards and Technology (NIST), FDA, University of Washington's Computational Linguistics Laboratory, Twitter, International Atomic Energy Agency, Georgetown's Center for Security and Emerging Technology, CNBC
Total: 3